52 research outputs found
Prediction of the Remaining Range of an Electric Vehicle
To limit the negative environmental impact of excessive oil consumption, the development of new, more energy-efficient means of transport is being encouraged. Although it currently offers considerably less range than a gasoline vehicle, electric vehicle technology is the most favored. Range anxiety, a feeling experienced by users of electric vehicles that is tied to their limited range and was first identified in the 1990s, currently represents a major socio-technological barrier to the development of electric vehicle technology. To limit the risk of running out of energy and to strengthen driver confidence in this new vehicle technology, an accurate method for predicting the remaining range is proposed. This original method uses a route-prediction algorithm based on the identification of left and right turns. Since the type of route traveled strongly influences the remaining range, this prediction strategy improves accuracy over commonly used alternatives. The relevance of this new prediction method is demonstrated both in simulation and in experiments.
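The abstract does not specify the actual prediction model, so the following is only a minimal illustrative sketch of the underlying idea: a predicted route type (inferred here from turn density, standing in for the turn-based route prediction) selects a consumption rate, and the residual range follows as remaining energy divided by that rate. All thresholds and consumption figures below are invented for illustration.

```python
# Hypothetical sketch only: route classes, thresholds, and consumption
# rates are invented, not taken from the paper.

def classify_route(turns_per_km: float) -> str:
    """Crudely map turn density to a route class (assumed thresholds)."""
    if turns_per_km > 4.0:
        return "urban"
    if turns_per_km > 1.0:
        return "suburban"
    return "highway"

# Assumed average consumption in kWh per km for each route class.
CONSUMPTION_KWH_PER_KM = {"urban": 0.18, "suburban": 0.15, "highway": 0.20}

def residual_range_km(battery_kwh: float, turns_per_km: float) -> float:
    """Remaining range = usable energy / predicted consumption rate."""
    route = classify_route(turns_per_km)
    return battery_kwh / CONSUMPTION_KWH_PER_KM[route]
```

The point of conditioning on route type is visible even in this toy version: the same battery state yields different range estimates on a twisty urban route than on a straight highway.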
3D Registration of Aerial and Ground Robots for Disaster Response: An Evaluation of Features, Descriptors, and Transformation Estimation
Global registration of heterogeneous ground and aerial mapping data is a
challenging task. This is especially difficult in disaster response scenarios
when we have no prior information on the environment and cannot assume the
regular order of man-made environments or meaningful semantic cues. In this
work we extensively evaluate different approaches to globally register UGV
generated 3D point-cloud data from LiDAR sensors with UAV generated point-cloud
maps from vision sensors. The approaches are realizations of different
selections for: a) local features: key-points or segments; b) descriptors:
FPFH, SHOT, or ESF; and c) transformation estimations: RANSAC or FGR.
Additionally, we compare the results against standard approaches like applying
ICP after a good prior transformation has been given. The evaluation criteria
include the distance which a UGV needs to travel to successfully localize, the
registration error, and the computational cost. In this context, we report our
findings on effectively performing the task on two new Search and Rescue
datasets. Our results have the potential to help the community take informed
decisions when registering point-cloud maps from ground robots to those from
aerial robots.
Comment: Awarded Best Paper at the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017).
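One of the pipeline variants evaluated above (local features plus a RANSAC transformation estimation) can be sketched as follows. This is not the paper's implementation: it assumes putative point correspondences between the UGV and UAV clouds are already given (in practice they would come from matching FPFH, SHOT, or ESF descriptors), and it estimates the rigid transform with the SVD-based Kabsch algorithm inside a RANSAC loop.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q
    (both N x 3), via the SVD-based Kabsch algorithm."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # fix a possible reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def ransac_rigid(P, Q, iters=200, thresh=0.1, seed=0):
    """RANSAC over putative correspondences P[i] <-> Q[i]: sample 3
    pairs, fit a rigid transform, keep the fit with most inliers."""
    rng = np.random.default_rng(seed)
    best_R, best_t, best_inliers = np.eye(3), np.zeros(3), 0
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = kabsch(P[idx], Q[idx])
        residuals = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = int((residuals < thresh).sum())
        if inliers > best_inliers:
            best_R, best_t, best_inliers = R, t, inliers
    return best_R, best_t
```

In the full pipeline the transform found this way would serve as the prior for a local refinement step such as ICP, which is exactly the baseline the abstract compares against.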
OREOS: Oriented Recognition of 3D Point Clouds in Outdoor Scenarios
We introduce a novel method for oriented place recognition with 3D LiDAR
scans. A Convolutional Neural Network is trained to extract compact descriptors
from single 3D LiDAR scans. These can be used both to retrieve near-by place
candidates from a map, and to estimate the yaw discrepancy needed for
bootstrapping local registration methods. We employ a triplet loss function for
training and use a hard-negative mining strategy to further increase the
performance of our descriptor extractor. In an evaluation on the NCLT and KITTI
datasets, we demonstrate that our method outperforms related state-of-the-art
approaches based on both data-driven and handcrafted data representations in
challenging long-term outdoor conditions.
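The triplet loss with hard-negative mining mentioned above can be sketched as follows. This covers only the loss computation, not the CNN descriptor extractor, and the margin value is a placeholder: the idea is that the negative used in the loss is mined as the one closest to the anchor, which yields the most informative gradient.

```python
import numpy as np

def triplet_loss_hard_negative(anchor, positive, negatives, margin=0.5):
    """Triplet margin loss where the negative is mined as the hardest
    one: the negative descriptor closest to the anchor.

    anchor, positive: 1-D descriptor vectors.
    negatives: 2-D array, one negative descriptor per row.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_negs = np.linalg.norm(negatives - anchor, axis=1)
    d_hard = d_negs.min()                      # hard-negative mining
    # loss is zero once the hardest negative is `margin` farther than
    # the positive; otherwise it pushes the triplet apart
    return max(0.0, d_pos - d_hard + margin)
```

During training this would be evaluated on mini-batches of descriptors produced by the network, with the negatives drawn from scans of other places.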
3D Localization, Mapping and Path Planning for Search and Rescue Operations
This work presents our results on 3D robot localization, mapping and path planning for the latest joint exercise of the European project 'Long-Term Human-Robot Teaming for Robot-Assisted Disaster Response' (TRADR). The full system is operated and evaluated by firemen end-users in real-world search and rescue experiments. We demonstrate that the system is able to plan a path to a goal position desired by the fireman operator in the TRADR Operational Control Unit (OCU), using a persistent 3D map created by the robot during previous sorties.
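The abstract does not say which planner the system uses, and the real system plans on a persistent 3D map; purely as a generic illustration of planning toward an operator-selected goal, here is a minimal A* sketch on a 2D occupancy grid.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = blocked), 4-connected,
    with a Manhattan-distance heuristic. Returns the path as a list of
    (row, col) cells, or None if the goal is unreachable."""
    tie = count()  # tiebreaker so heap never compares cells/parents
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from = {}          # cell -> parent, set when cell is expanded
    g_best = {start: 0}
    while open_set:
        _, _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue        # already expanded via a cheaper path
        came_from[cell] = parent
        if cell == goal:    # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])
                    and grid[nb[0]][nb[1]] == 0 and nb not in came_from):
                ng = g + 1
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    heapq.heappush(
                        open_set, (ng + h(nb), next(tie), ng, nb, cell))
    return None
```

A 3D variant would use the same search over a voxel or traversability graph derived from the persistent map, with edge costs reflecting terrain.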
SegMap: 3D Segment Mapping using Data-Driven Descriptors
When performing localization and mapping, working at the level of structure
can be advantageous in terms of robustness to environmental changes and
differences in illumination. This paper presents SegMap: a map representation
solution to the localization and mapping problem based on the extraction of
segments in 3D point clouds. In addition to facilitating the computationally
intensive task of processing 3D point clouds, working at the level of segments
addresses the data compression requirements of real-time single- and
multi-robot systems. While current methods extract descriptors for the single
task of localization, SegMap leverages a data-driven descriptor in order to
extract meaningful features that can also be used for reconstructing a dense 3D
map of the environment and for extracting semantic information. This is
particularly interesting for navigation tasks and for providing visual feedback
to end-users such as robot operators, for example in search and rescue
scenarios. These capabilities are demonstrated in multiple urban driving and
search and rescue experiments. Our method leads to an increase in area under
the ROC curve of 28.3% over the current state of the art using eigenvalue-based
features. We also obtain reconstruction capabilities very similar to those of a
model specifically trained for this task. The SegMap implementation will be
made available open-source, along with easy-to-run demonstrations, at
www.github.com/ethz-asl/segmap. A video demonstration is available at
https://youtu.be/CMk4w4eRobg.
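SegMap's actual segmentation front-end is not detailed in the abstract; as a rough sketch of the idea of extracting segments from a 3D point cloud, here is a naive Euclidean clustering (O(n^2) flood fill). The radius and minimum segment size are arbitrary placeholders, and a real system would use a spatial index instead of brute-force distances.

```python
import numpy as np

def euclidean_clusters(points, radius=0.5, min_size=3):
    """Naive Euclidean clustering: points closer than `radius` are
    connected, and connected components form the segments. Returns a
    list of index arrays, one per segment of at least `min_size`."""
    n = len(points)
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]              # flood fill from the seed point
        labels[seed] = current
        while stack:
            i = stack.pop()
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.nonzero((dists < radius) & (labels == -1))[0]:
                labels[j] = current
                stack.append(j)
        current += 1
    segments = [np.nonzero(labels == c)[0] for c in range(current)]
    return [s for s in segments if len(s) >= min_size]
```

Each resulting segment would then be passed through the data-driven descriptor for localization, reconstruction, and semantic labeling, as described above.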